Leveraging Coding Techniques for Speeding up Distributed Computing

Authors

  • Konstantinos Konstantinidis
  • Aditya Ramamoorthy

Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50010
Emails: {kostas, adityar}@iastate.edu

Abstract

Over the last decade, distributed computing frameworks such as MapReduce, Hadoop, and Spark have become ubiquitous. Large-scale clusters routinely process data on the order of petabytes or more; the sheer size of the data precludes processing it on a single computer. The philosophy of these frameworks is to partition the overall job into smaller tasks that are executed on different servers; this is called the map phase. It is followed by a data shuffling phase, in which the appropriate data are exchanged between servers. The final, so-called reduce phase completes the computation. One potential approach, explored in prior work, for reducing the overall execution time is to operate on a natural tradeoff between computation and communication. Specifically, the idea is to run redundant copies of the map tasks, placed on judiciously chosen servers; the shuffle phase then exploits the locations of these copies and uses coded transmissions. The main drawback of this approach is that it requires the original job to be split into a number of map tasks that grows exponentially in the system parameters. This is problematic: as we demonstrate, splitting jobs too finely can in fact adversely affect the overall execution time. In this work we show that one can simultaneously obtain low communication loads while ensuring that jobs do not need to be split too finely. Our approach uncovers a deep relationship between this problem and a class of combinatorial structures called resolvable designs. An appropriate interpretation of resolvable designs allows for the development of coded distributed computing schemes whose splitting levels are exponentially lower than those of prior work. We present experimental results obtained on Amazon EC2 clusters for a widely known distributed algorithm, namely TeraSort. We obtain over a 4.69× speedup over the baseline approach and more than 2.6× over the current state of the art.
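To make the computation-communication tradeoff concrete, here is a minimal Python sketch of the XOR-multicast mechanism that coded shuffling relies on. The three-server setup, the block contents, and the placement are illustrative assumptions on our part; the paper's actual contribution is a resolvable-design-based placement that achieves this effect while splitting the job exponentially less finely.

# Toy illustration of the coded shuffle described above: a minimal sketch
# assuming 3 servers and each map task replicated on r = 2 of them. This is
# NOT the paper's resolvable-design construction, only the basic
# XOR-multicast mechanism that coded distributed computing builds on.

# Three equal-sized blocks of intermediate (post-map) data.
blocks = {"A": b"alpha...", "B": b"bravo...", "C": b"charlie."}

# Redundant placement: each server maps two of the three blocks.
placement = {1: {"A", "B"}, 2: {"B", "C"}, 3: {"C", "A"}}

# Each server still needs the one block it did not map itself.
needs = {s: set(blocks) - held for s, held in placement.items()}
assert needs == {1: {"C"}, 2: {"A"}, 3: {"B"}}

def xor(x: bytes, y: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

# Coded shuffle step: server 1 broadcasts a single coded packet, A xor B.
coded_packet = xor(blocks["A"], blocks["B"])

# Server 2 holds B, so it cancels B to recover the A it needs;
# server 3 holds A, so it cancels A to recover the B it needs.
assert xor(coded_packet, blocks["B"]) == blocks["A"]
assert xor(coded_packet, blocks["A"]) == blocks["B"]

print("one coded broadcast delivered two missing blocks")

Repeating the same step from servers 2 and 3 satisfies every remaining demand, so each broadcast is useful to two receivers at once; without the redundant map copies, each missing block would have to be unicast separately. This is the sense in which extra (redundant) computation buys reduced shuffle traffic.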




Disciplines

Electrical and Computer Engineering | Systems and Communications

Comments

This is a manuscript of Konstantinidis, K., & Ramamoorthy, A. (2018). Leveraging Coding Techniques for Speeding up Distributed Computing. arXiv preprint arXiv:1802.03049. Posted with permission. This article is available at the Iowa State University Digital Repository: https://lib.dr.iastate.edu/ece_pubs/164

Journal:
  • CoRR

Volume: abs/1802.03049    Issue: -

Pages: -

Publication date: 2018